12 research outputs found

    The AI Incident Database as an Educational Tool to Raise Awareness of AI Harms: A Classroom Exploration of Efficacy, Limitations, & Future Improvements

    Prior work has established the importance of integrating AI ethics topics into computer and data sciences curricula. We provide evidence suggesting that one of the critical objectives of AI Ethics education must be to raise awareness of AI harms. While there are various sources to learn about such harms, the AI Incident Database (AIID) is one of the few attempts at offering a relatively comprehensive database indexing prior instances of harms or near harms stemming from the deployment of AI technologies in the real world. This study assesses the effectiveness of AIID as an educational tool to raise awareness regarding the prevalence and severity of AI harms in socially high-stakes domains. We present findings obtained through a classroom study conducted at an R1 institution as part of a course focused on the societal and ethical considerations around AI and ML. Our qualitative findings characterize students' initial perceptions of core topics in AI ethics and their desire to close the educational gap between their technical skills and their ability to think systematically about ethical and societal aspects of their work. We find that interacting with the database helps students better understand the magnitude and severity of AI harms and instills in them a sense of urgency around (a) designing functional and safe AI and (b) strengthening governance and accountability mechanisms. Finally, we compile students' feedback about the tool and our class activity into actionable recommendations for the database development team and the broader community to improve awareness of AI harms in AI ethics education. Comment: 37 pages, 11 figures; To appear in the proceedings of EAAMO 202

    Soundify: Matching Sound Effects to Video

    In the art of video editing, sound helps add character to an object and immerse the viewer within a space. Through formative interviews with professional editors (N=10), we found that the task of adding sounds to video can be challenging. This paper presents Soundify, a system that assists editors in matching sounds to video. Given a video, Soundify identifies matching sounds, synchronizes the sounds to the video, and dynamically adjusts panning and volume to create spatial audio. In a human evaluation study (N=889), we show that Soundify is capable of matching sounds to video out-of-the-box for a diverse range of audio categories. In a within-subjects expert study (N=12), we demonstrate the usefulness of Soundify in helping video editors match sounds to video with lighter workload, reduced task completion time, and improved usability. Comment: Full paper in UIST 2023; Short paper in NeurIPS 2021 ML4CD Workshop; Online demo: http://soundify.c
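
    One way to realize the "identify matching sounds" step is to score sampled video frames against a vocabulary of sound-effect labels with an image-text model such as CLIP. Below is a minimal sketch, assuming Hugging Face transformers and a hypothetical label set; it is an illustration, not the authors' implementation.

```python
# A minimal sketch of a Soundify-style matching step, using an off-the-shelf
# CLIP model from Hugging Face transformers. The label set and function names
# are hypothetical, not the authors' implementation.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

SOUND_LABELS = ["rain", "traffic", "birds chirping", "ocean waves"]  # hypothetical sound library

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def match_sounds(frames: list[Image.Image]) -> list[str]:
    """Score each sampled video frame against the sound-label vocabulary
    and return the best-matching label per frame."""
    inputs = processor(text=SOUND_LABELS, images=frames, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(**inputs).logits_per_image  # shape: (num_frames, num_labels)
    return [SOUND_LABELS[i] for i in logits.argmax(dim=-1).tolist()]
```

    Synchronization, panning, and volume adjustment would then be driven by where, and how strongly, each label scores across the timeline.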

    VideoMap: Video Editing in Latent Space

    Video has become a dominant form of media. However, video editing interfaces have remained largely unchanged over the past two decades. Such interfaces typically consist of a grid-like asset management panel and a linear editing timeline. When working with a large number of video clips, it can be difficult to sort through them all and identify patterns within them (e.g., opportunities for smooth transitions and storytelling). In this work, we imagine a new paradigm for video editing by mapping videos into a 2D latent space and building a proof-of-concept interface. Comment: Accepted to NeurIPS 2022 Workshop on Machine Learning for Creativity and Design. Website: https://chuanenlin.com/videoma
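
    The abstract leaves the video-to-latent-space mapping unspecified. One plausible construction embeds each clip (e.g., mean CLIP features over sampled frames) and projects the embeddings to 2D; the sketch below makes exactly those assumptions and is not the authors' pipeline.

```python
# A speculative sketch of a video-to-2D mapping: mean CLIP frame embeddings per
# clip, projected with t-SNE. The paper does not specify this pipeline; the
# names here are assumptions for illustration.
import numpy as np
import torch
from PIL import Image
from sklearn.manifold import TSNE
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def clip_embedding(frames: list[Image.Image]) -> np.ndarray:
    """One embedding per clip: mean CLIP image features over sampled frames."""
    inputs = processor(images=frames, return_tensors="pt")
    with torch.no_grad():
        feats = model.get_image_features(**inputs)  # (num_frames, dim)
    return feats.mean(dim=0).numpy()

def map_clips_to_2d(clips: list[list[Image.Image]]) -> np.ndarray:
    """Project clip embeddings to 2D so nearby points are semantically similar clips."""
    embeddings = np.stack([clip_embedding(frames) for frames in clips])
    tsne = TSNE(n_components=2, perplexity=min(30, len(embeddings) - 1))
    return tsne.fit_transform(embeddings)  # (num_clips, 2) coordinates for the map view
```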

    Videogenic: Video Highlights via Photogenic Moments

    This paper investigates the challenge of extracting highlight moments from videos. To perform this task, a system needs to understand what constitutes a highlight for arbitrary video domains while at the same time being able to scale across different domains. Our key insight is that photographs taken by photographers tend to capture the most remarkable or photogenic moments of an activity. Drawing on this insight, we present Videogenic, a system capable of creating domain-specific highlight videos for a wide range of domains. In a human evaluation study (N=50), we show that a high-quality photograph collection combined with CLIP-based retrieval (which uses a neural network with semantic knowledge of images) can serve as an excellent prior for finding video highlights. In a within-subjects expert study (N=12), we demonstrate the usefulness of Videogenic in helping video editors create highlight videos with lighter workload, shorter task completion time, and better usability. Comment: Accepted to NeurIPS 2022 Workshop on Machine Learning for Creativity and Design. Website: https://chuanenlin.com/videogeni
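
    The CLIP-based retrieval mentioned above can be sketched directly: unit-normalized embeddings for the photo collection and for sampled video frames, with each frame scored by its best cosine similarity to any photo. A hedged approximation follows; peak-picking over the scores (not shown) would select the highlight segments.

```python
# A hedged approximation of the photo-prior idea behind Videogenic: frames most
# similar to a curated photo collection are highlight candidates. Function names
# are hypothetical; this is not the authors' exact method.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def embed(images: list[Image.Image]) -> torch.Tensor:
    """Unit-normalized CLIP image embeddings, so dot products are cosine similarities."""
    inputs = processor(images=images, return_tensors="pt")
    with torch.no_grad():
        feats = model.get_image_features(**inputs)
    return feats / feats.norm(dim=-1, keepdim=True)

def highlight_scores(frames: list[Image.Image], photos: list[Image.Image]) -> list[float]:
    """Score each video frame by its best similarity to any photo in the collection."""
    sims = embed(frames) @ embed(photos).T  # (num_frames, num_photos)
    return sims.max(dim=-1).values.tolist()  # peaks mark photogenic (highlight) moments
```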

    Pulling Back the Curtain on the Wizards of Oz

    The Wizard of Oz method is an increasingly common practice in HCI and CSCW studies as part of iterative design processes for interactive systems. Instead of designing a fully-fledged system, the ‘technical work’ of key system components is completed by human operators yet presented to study participants as if computed by a machine. However, little is known about how Wizard of Oz studies are interactionally and collaboratively achieved in situ by researchers and participants. By adopting an ethnomethodological perspective, we analyse our use of the method in studies with a voice-controlled vacuum robot and two researchers present. We present data that reveals how such studies are organised and presented to participants and unpack the coordinated orchestration work that unfolds ‘behind the scenes’ to complete the study. We examine how the researchers attend to participant requests and technical breakdowns, and discuss the performative, collaborative, and methodological nature of their work. We conclude by offering insights from our application of the approach to help others in the HCI and CSCW communities use the method.

    Team Learning as a Lens for Designing Human-AI Co-Creative Systems

    Generative, ML-driven interactive systems have the potential to change how people interact with computers in creative processes - turning tools into co-creators. However, it is still unclear how we might achieve effective human-AI collaboration in open-ended task domains. There are several known challenges around communication in the interaction with ML-driven systems. An overlooked aspect in the design of co-creative systems is how users can be better supported in learning to collaborate with such systems. Here we reframe human-AI collaboration as a learning problem: Inspired by research on team learning, we hypothesize that learning strategies similar to those that help human-human teams might also increase the effectiveness and quality of collaboration for humans working with co-creative generative systems. In this position paper, we aim to promote team learning as a lens for designing more effective co-creative human-AI collaboration and emphasize collaboration process quality as a goal for co-creative systems. Furthermore, we outline a preliminary schematic framework for embedding team learning support in co-creative AI systems. We conclude by proposing a research agenda and posing open questions for further study on supporting people in learning to collaborate with generative AI systems. Comment: ACM CHI 2022 Workshop on Generative AI and HC

    Setting the Stage with Metaphors for Interaction - Researching Methodological Approaches for Interaction Design of Autonomous Vehicles

    Development of autonomous vehicles is progressing. As automation levels increase, the roles of both the driver and the vehicle are changing, meaning that they need to forge a new relationship with each other as the vehicle gains more agency. We believe this requires approaches that address that relationship early in the design process. One such approach is choosing a metaphor as a guiding principle for the interaction to set the preconditions for the relationship. Another approach is early evaluation of designs between system concept prototypes and the user. The aim of this one-day workshop is to explore the use of metaphors and evaluation through enactment in the design of human-vehicle interaction. This will be done through a short concept development process, where participants are asked to reflect on the process. Outcomes will be an evolved understanding of using the design approaches, as well as identified collaboration and research needs.

    A video-based automated driving simulator for automotive UI prototyping, UX and behaviour research

    The lack of automated cars above SAE level 3 raises challenges for conducting User Experience Design (UXD) and behaviour research for automated driving. User-centred methods are critical to ensuring human-friendly progress of vehicle automation. This work introduces the Immersive Video-based Automated Driving (IVAD) Simulator. It uses carefully recorded 180/360° videos that are played back in a driving simulator, providing immersive driving experiences in visually realistic and familiar environments. This paper reports lessons from the iterative development of IVAD and findings from two user studies: a simulator study (N=15) focused on the immersive experience, and a VR study (N=16) focused on rapid prototyping and the evaluation of Augmented Reality (AR) concepts. Overall, we found the method to be a useful, versatile, and low-budget UXD tool with a high level of immersion, uniquely aided by the familiarity of the environment. IVAD's limitations and future improvements are discussed in relation to research applications within AutoUI.